
    Cell Segmentation and Tracking using CNN-Based Distance Predictions and a Graph-Based Matching Strategy

    The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs, or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio remains a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method is capable of utilizing not only touching cells but also close cells in the training process. Furthermore, this representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data. For the prediction of the proposed neighbor distances, an adapted U-Net convolutional neural network (CNN) with two decoder paths is used. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm includes a movement estimation in the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Our combined tracking-by-detection method has proven its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where we achieved, as team KIT-Sch-GE, multiple top-three rankings, including two top performances, using a single segmentation model for the diverse datasets. Comment: 25 pages, 14 figures; methods of team KIT-Sch-GE for the IEEE ISBI 2020 Cell Tracking Challenge.
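
    As a rough illustration of the distance-based representation described above, the sketch below derives two training targets from a 2D instance label image: a cell-distance map (distance to the cell's own border) and an inverted neighbor-distance map (high values on borders facing close cells). The exact normalization is not specified in the abstract, so the scaling here is an assumption.

        # A minimal sketch of distance-map training targets, assuming
        # normalized per-cell distances; not the authors' exact formulation.
        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def distance_targets(label_img: np.ndarray):
            """Build cell-distance and neighbor-distance maps from an instance mask."""
            cell_dist = np.zeros(label_img.shape, dtype=np.float32)
            neigh_dist = np.zeros(label_img.shape, dtype=np.float32)
            for cell_id in np.unique(label_img)[1:]:        # skip background 0
                mask = label_img == cell_id
                # distance to the cell's own border, normalized to [0, 1]
                d = distance_transform_edt(mask)
                cell_dist[mask] = d[mask] / max(d.max(), 1e-6)
                # distance to the nearest *other* cell, inverted so that
                # pixels close to a neighbor get values near 1
                others = (label_img > 0) & ~mask
                if not others.any():                        # single cell: no neighbor term
                    continue
                d_other = distance_transform_edt(~others)
                inv = 1.0 - d_other / max(d_other[mask].max(), 1e-6)
                neigh_dist[mask] = np.clip(inv[mask], 0.0, 1.0)
            return cell_dist, neigh_dist

    In the paper's setup, the adapted U-Net with two decoder paths would regress both maps at once, one decoder per target.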

    Improving generative adversarial networks for patch-based unpaired image-to-image translation

    Deep learning models for image segmentation achieve high-quality results but need large amounts of training data. Training data is primarily annotated manually, which is time-consuming and often not feasible for large-scale 2D and 3D images. Manual annotation can be reduced using synthetic training data generated by generative adversarial networks that perform unpaired image-to-image translation. Currently, large images need to be processed patch-wise during inference, resulting in local artifacts in border regions after merging the individual patches. To reduce these artifacts, we propose a new method that integrates overlapping patches into the training process. We incorporated our method into CycleGAN and tested it on our new 2D tiling-strategy benchmark dataset. The results show that the artifacts are reduced by 85% compared to state-of-the-art weighted tiling. Additionally, we demonstrate transferability to real-world 3D biological image data, obtaining a high-quality synthetic dataset. Increasing the quality of synthetic training datasets can reduce manual annotation, increase the quality of model output, and help develop and evaluate deep learning models.
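
    For context, the weighted tiling mentioned above can be sketched as follows: overlapping patches are blended with a smooth weight window during inference so that patch borders contribute less than patch centers. The Hann window and the top-left patch coordinates are illustrative assumptions, not the exact implementation compared against in the paper.

        # A minimal sketch of weighted tiling: blend overlapping patches with
        # a 2D Hann window (assumed window choice).
        import numpy as np

        def merge_patches(patches, coords, image_shape, patch_size):
            """Blend translated square patches into one image using a weight window."""
            w1d = np.hanning(patch_size)
            window = np.outer(w1d, w1d) + 1e-8              # avoid division by zero
            out = np.zeros(image_shape, dtype=np.float32)
            norm = np.zeros(image_shape, dtype=np.float32)
            for patch, (y, x) in zip(patches, coords):      # coords = top-left corners
                out[y:y + patch_size, x:x + patch_size] += patch * window
                norm[y:y + patch_size, x:x + patch_size] += window
            return out / np.maximum(norm, 1e-8)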

    Synthesis of large scale 3D microscopic images of 3D cell cultures for training and benchmarking

    The analysis of 3D microscopic cell culture images plays a vital role in the development of new therapeutics. While 3D cell cultures offer a greater similarity to the human organism than adherent cell cultures, they introduce new challenges for automatic evaluation, such as increased heterogeneity. Deep learning algorithms can outperform conventional analysis methods under such conditions but require large amounts of training data. Due to data size and complexity, manually annotating 3D images to generate large datasets is a nearly impossible task. We therefore propose a pipeline that combines conventional simulation methods with deep-learning-based optimization to generate large 3D synthetic images of 3D cell cultures where the labels are known by design. The hybrid procedure helps to keep the generated image structures consistent with the underlying labels. A new approach and an additional measure are introduced to model and evaluate the reduced brightness and quality in deeper image regions. Our analyses show that the deep learning optimization step consistently improves the quality of the generated images. We also demonstrate that a deep learning segmentation model trained with our synthetic data outperforms a classical segmentation method on real image data. The presented synthesis method allows selecting the segmentation model most suitable for the user’s data, providing an ideal basis for further data analysis.
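
    As a minimal illustration of modeling the reduced brightness in deeper image regions, the sketch below applies an exponential intensity decay along the axial (z) axis of a volume. The exponential model and the decay constant are hypothetical choices, not values from the paper.

        # A simple depth-attenuation model for a (z, y, x) volume; the decay
        # constant is an assumed, illustrative parameter.
        import numpy as np

        def attenuate_depth(volume: np.ndarray, decay: float = 0.02) -> np.ndarray:
            """Darken a (z, y, x) volume exponentially with imaging depth."""
            z = np.arange(volume.shape[0], dtype=np.float32)
            factors = np.exp(-decay * z)                    # 1.0 at the surface slice
            return volume * factors[:, None, None]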

    BeadNet: Deep learning-based bead detection and counting in low-resolution microscopy images

    Motivation: Automated counting of beads is required for many high-throughput experiments, such as studying mimicked bacterial invasion processes. However, state-of-the-art algorithms under- or overestimate the number of beads in low-resolution images, and expert knowledge is needed to adjust their parameters. Results: In combination with our image labeling tool, BeadNet enables biologists to easily annotate and process their data, reducing the expertise required by many existing image analysis pipelines. BeadNet outperforms state-of-the-art algorithms in terms of missing, added, and total number of beads. Availability and implementation: BeadNet (software, code, and dataset) is available at https://bitbucket.org/t_scherr/beadnet. The image labeling tool is available at https://bitbucket.org/abartschat/imagelabelingtool.
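
    To make the counting task concrete, the sketch below shows one simple way to turn a detection network's output into a bead count: threshold the predicted map and count sufficiently large connected components. The threshold and minimum component size are illustrative assumptions, not BeadNet's actual post-processing.

        # A minimal bead-counting sketch on a predicted probability map;
        # thresh and min_size are assumed parameters.
        import numpy as np
        from scipy import ndimage

        def count_beads(prob_map: np.ndarray, thresh: float = 0.5, min_size: int = 3) -> int:
            """Count beads as connected components above a threshold and size limit."""
            labeled, n = ndimage.label(prob_map > thresh)
            sizes = np.bincount(labeled.ravel())[1:]        # component sizes, background excluded
            return int(np.sum(sizes >= min_size))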

    Machine learning methods for automated classification of tumors with papillary thyroid carcinoma-like nuclei: A quantitative analysis

    When approaching thyroid gland tumor classification, differentiating between samples with and without “papillary thyroid carcinoma-like” nuclei is a daunting task with high inter-observer variability among pathologists. Thus, there is increasing interest in using machine learning approaches to provide pathologists with real-time decision support. In this paper, we optimize and quantitatively compare two automated machine learning methods for thyroid gland tumor classification on two datasets to assist pathologists in decision-making regarding these methods and their parameters. The first method is a feature-based classification originating from common image processing; it consists of cell nucleus segmentation, feature extraction, and subsequent thyroid gland tumor classification using different classifiers. The second method is a deep-learning-based classification that directly classifies the input images with a convolutional neural network, without the need for cell nucleus segmentation. On the Tharun and Thompson dataset, the feature-based classification achieves an accuracy of 89.7% (Cohen’s Kappa 0.79), compared to 89.1% (Cohen’s Kappa 0.78) for the deep-learning-based classification. On the Nikiforov dataset, the feature-based classification achieves an accuracy of 83.5% (Cohen’s Kappa 0.46), compared to 77.4% (Cohen’s Kappa 0.35) for the deep-learning-based classification. Thus, both automated thyroid tumor classification methods can reach the classification level of an expert pathologist. To our knowledge, this is the first study comparing feature-based and deep-learning-based classification regarding their ability to classify samples with and without papillary thyroid carcinoma-like nuclei on two large-scale datasets.
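
    The reported metrics can be computed from classifier predictions with scikit-learn, as in the minimal sketch below; the label arrays are placeholders, not data from either dataset.

        # Computing accuracy and Cohen's Kappa with scikit-learn; the labels
        # below are placeholders.
        from sklearn.metrics import accuracy_score, cohen_kappa_score

        y_true = [0, 1, 1, 0, 1]    # placeholder ground-truth labels
        y_pred = [0, 1, 0, 0, 1]    # placeholder classifier output
        print(f"accuracy: {accuracy_score(y_true, y_pred):.3f}")
        print(f"Cohen's Kappa: {cohen_kappa_score(y_true, y_pred):.3f}")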

    CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting

    Nuclear detection, segmentation, and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, the associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state of the art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release the challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
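
    As a minimal sketch of the cellular-composition part of the task, the snippet below counts nuclei per class given an instance map and a per-pixel class map, assigning each nucleus the majority class over its pixels; the majority-vote rule is an illustrative assumption, not the challenge's evaluation protocol.

        # Counting nuclei per class from an instance map and a class map;
        # the majority-vote assignment is an assumed convention.
        import numpy as np

        def composition(instance_map: np.ndarray, class_map: np.ndarray) -> dict:
            """Count nuclei per class, assigning each nucleus its majority class label."""
            counts = {}
            for inst_id in np.unique(instance_map)[1:]:     # skip background 0
                classes = class_map[instance_map == inst_id].astype(np.int64)
                majority = int(np.bincount(classes).argmax())
                counts[majority] = counts.get(majority, 0) + 1
            return counts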

    Unsupervised GAN epoch selection for biomedical data synthesis

    Supervised neural networks are used for segmentation in many biological and biomedical applications. To avoid the time-consuming and tiring process of manual labeling, unsupervised Generative Adversarial Networks (GANs) can be used to synthesize labeled data. However, the training of GANs requires extensive computation and is often unstable. Due to the lack of established stopping criteria, GANs are usually trained multiple times for a heuristically fixed number of epochs. Early stopping and epoch selection can lead to better synthetic datasets, resulting in higher downstream segmentation quality on biological or medical data. This article examines whether the Fréchet Inception Distance (FID), the Kernel Inception Distance (KID), or the WeightWatcher tool can be used for early stopping or epoch selection of unsupervised GANs. The experiments show that the last trained GAN epoch is not necessarily the best one for synthesizing downstream segmentation data. On complex datasets, FID and KID correlate with the downstream segmentation quality, and both can be used for epoch selection.
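
    For reference, the Fréchet distance underlying FID can be computed from two sets of extracted feature vectors as sketched below; extracting the features with an Inception network is omitted, and the (n_samples, dim) input arrays are assumed. Epoch selection then amounts to evaluating this distance for the samples of each saved epoch and keeping the epoch with the smallest value.

        # Fréchet distance between Gaussian feature statistics (the core of
        # FID); inputs are assumed to be (n_samples, dim) feature arrays.
        import numpy as np
        from scipy.linalg import sqrtm

        def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
            mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
            cov_a = np.cov(feats_a, rowvar=False)
            cov_b = np.cov(feats_b, rowvar=False)
            covmean = sqrtm(cov_a @ cov_b)
            if np.iscomplexobj(covmean):                    # numerical noise can add
                covmean = covmean.real                      # tiny imaginary parts
            return float(np.sum((mu_a - mu_b) ** 2)
                         + np.trace(cov_a + cov_b - 2.0 * covmean))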

